In mathematics, the idea of least squares can be applied to approximating a given function by a weighted sum of other functions. The best approximation can be defined as that which minimises the difference between the original function and the approximation; for a least-squares approach the quality of the approximation is measured in terms of the squared differences between the two.

==Functional analysis==

A generalization of least squares from data sets to function spaces is the approximation of a function ''f''(''x'') by a sum of other functions, usually an orthogonal set:

:<math>f(x) \approx f_n(x) = a_1 \phi_1(x) + a_2 \phi_2(x) + \cdots + a_n \phi_n(x),</math>

with the set of functions <math>\{ \phi_j(x) \}</math> an orthonormal set over the interval of interest, say [''a'', ''b'']: see also Fejér's theorem. The coefficients <math>\{ a_j \}</math> are selected to make the magnitude of the difference ||''f'' − ''f''<sub>''n''</sub>||<sup>2</sup> as small as possible. For example, the magnitude, or norm, of a function ''g''(''x'') over the interval [''a'', ''b''] can be defined by:

:<math>\|g\| = \left( \int_a^b g^*(x)\, g(x) \, dx \right)^{1/2},</math>

where the ‘*’ denotes the complex conjugate in the case of complex functions. The extension of Pythagoras' theorem in this manner leads to function spaces and the notion of Lebesgue measure, an idea of “space” more general than the original basis of Euclidean geometry. The <math>\{ \phi_j(x) \}</math> satisfy orthonormality relations:

:<math>\int_a^b \phi_i^*(x)\, \phi_j(x) \, dx = \delta_{ij},</math>

where ''δ''<sub>''ij''</sub> is the Kronecker delta. Substituting the function ''f''<sub>''n''</sub> into these equations then leads to the ''n''-dimensional Pythagorean theorem:

:<math>\|f_n\|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2.</math>

The coefficients <math>\{ a_j \}</math> making ||''f'' − ''f''<sub>''n''</sub>||<sup>2</sup> as small as possible are found to be:

:<math>a_j = \int_a^b \phi_j^*(x)\, f(x) \, dx.</math>

The generalization of the ''n''-dimensional Pythagorean theorem to infinite-dimensional real inner product spaces is known as Parseval's identity or Parseval's equation. Particular examples of such a representation of a function are the Fourier series and the generalized Fourier series.
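The recipe above can be checked numerically: project a function onto an orthonormal set, and the squared error shrinks as more basis functions are added. The sketch below (a minimal illustration; the choice of ''f''(''x'') = ''x'' on [−π, π] and the sine basis φ<sub>''k''</sub>(''x'') = sin(''kx'')/√π are assumptions for the example, not taken from the article) computes the least-squares coefficients ''a''<sub>''j''</sub> = ∫ φ<sub>''j''</sub>*(''x'') ''f''(''x'') d''x'' by quadrature:

```python
import numpy as np

# Fine uniform grid on the interval of interest [-pi, pi].
x = np.linspace(-np.pi, np.pi, 20001)
f = x  # example target function f(x) = x (an assumption for illustration)

def integrate(g):
    """Trapezoidal approximation of the integral of g over [-pi, pi]."""
    dx = x[1] - x[0]
    return (g.sum() - 0.5 * (g[0] + g[-1])) * dx

def phi(k):
    """Orthonormal basis function sin(k x)/sqrt(pi) on [-pi, pi]."""
    return np.sin(k * x) / np.sqrt(np.pi)

def coeff(k):
    """Least-squares coefficient a_k = integral of phi_k(x) * f(x) dx."""
    return integrate(phi(k) * f)

def approx(n):
    """Partial sum f_n = a_1 phi_1 + ... + a_n phi_n."""
    return sum(coeff(k) * phi(k) for k in range(1, n + 1))

def sq_error(n):
    """Squared norm of the residual, ||f - f_n||^2."""
    return integrate((f - approx(n)) ** 2)

# Adding basis functions can only reduce the squared error, and
# sum(|a_k|^2) approaches ||f||^2 = 2 pi^3 / 3 (Parseval's identity).
print(coeff(1), sq_error(2), sq_error(8))
```

For this example the exact first coefficient is ''a''<sub>1</sub> = 2√π ≈ 3.545, and the monotone decrease of the squared error with ''n'' is exactly the ''n''-dimensional Pythagorean relation at work: the error equals ||''f''||<sup>2</sup> − Σ |''a''<sub>''k''</sub>|<sup>2</sup>.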